Scalable Non-linear Learning with Adaptive Polynomial Expansions

Authors

  • Alekh Agarwal
  • Alina Beygelzimer
  • Daniel J. Hsu
  • John Langford
  • Matus Telgarsky
Abstract

Can we effectively learn a nonlinear representation in time comparable to linear learning? We describe a new algorithm that explicitly and adaptively expands higher-order interaction features over base linear representations. The algorithm is designed for extreme computational efficiency, and an extensive experimental study shows that its computation/prediction tradeoff ability compares very favorably against strong baselines.
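To make the idea of adaptively expanding interaction features concrete, here is a minimal illustrative sketch, not the authors' implementation: a linear model's weights are used to pick the currently "heaviest" features, and the representation is grown by taking products of base features with just those. The function name, the top-k selection rule, and the toy data are all hypothetical.

```python
import numpy as np

def expand_top_features(X_base, X_cur, w, k=2):
    """Augment the current representation with interaction features:
    products of each base feature with the k current features whose
    learned weights have the largest magnitude (a hypothetical rule
    illustrating adaptive expansion, not the paper's exact criterion)."""
    top = np.argsort(-np.abs(w))[:k]                  # indices of heaviest features
    new_cols = [X_base * X_cur[:, [j]] for j in top]  # elementwise products
    return np.hstack([X_cur] + new_cols)

# toy usage: 5 examples, 3 base features, starting from the base representation
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
w = np.array([0.1, -2.0, 0.5])                        # weights from a linear fit
X2 = expand_top_features(X, X, w, k=1)
print(X2.shape)                                       # (5, 6): 3 base + 3 interaction
```

Iterating this step (refit a linear model on the expanded features, then expand again) yields higher-order monomials only along directions the data supports, which is what keeps the cost close to linear learning.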


Similar resources

Analytic Regularity and GPC Approximation for Control Problems Constrained by Linear Parametric Elliptic and Parabolic PDEs

This paper deals with linear-quadratic optimal control problems constrained by a parametric or stochastic elliptic or parabolic PDE. We address the (difficult) case that the number of parameters may be countably infinite, i.e., σj with j ∈ N, and that the PDE operator may depend non-affinely on the parameters. We consider tracking-type functionals and distributed as well as boundary controls. B...


Adaptive sparse polynomial chaos expansion based on least angle regression

Polynomial chaos (PC) expansions are used in stochastic finite element analysis to represent the random model response by a set of coefficients in a suitable (so-called polynomial chaos) basis. The number of terms to be computed grows dramatically with the size of the input random vector, which makes the computational cost of classical solution schemes (be it intrusive (i.e. of Galerkin typ...
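The combinatorial growth mentioned in the blurb above can be made explicit: a total-degree-p PC basis in d input variables has C(d+p, p) terms. A small sketch:

```python
from math import comb

def num_pc_terms(dim, degree):
    """Number of multivariate polynomial chaos basis terms of total
    degree <= `degree` in `dim` input random variables: C(dim+degree, degree)."""
    return comb(dim + degree, degree)

# growth of a degree-3 basis with input dimension
for d in (2, 5, 10, 20):
    print(d, num_pc_terms(d, 3))  # 2 -> 10, 5 -> 56, 10 -> 286, 20 -> 1771
```

This blow-up is what motivates adaptive sparse selection schemes such as the least-angle-regression approach described in the paper.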


Adaptive Basis Function Construction: An Approach for Adaptive Building of Sparse Polynomial Regression Models

The task of learning useful models from available data is common in virtually all fields of science, engineering, and finance. The goal of the learning task is to estimate unknown (input, output) dependency (or model) from training data (consisting of a finite number of samples) with good prediction (generalization) capabilities for future (test) data (Cherkassky & Mulier, 2007; Hastie et al., ...


Large-scale log-determinant computation through stochastic Chebyshev expansions

Logarithms of determinants of large positive definite matrices appear ubiquitously in machine learning applications including Gaussian graphical and Gaussian process models, partition functions of discrete graphical models, minimum-volume ellipsoids, metric learning and kernel learning. Log-determinant computation involves the Cholesky decomposition at a cost cubic in the number of variables,...
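The cubic cost mentioned above is what stochastic Chebyshev methods avoid: using only matrix-vector products, log det(A) = trace(log A) is estimated by combining a Chebyshev polynomial approximation of log with a Hutchinson trace estimator. A sketch under stated assumptions (the eigenvalue bounds [a, b] are given, and the degree/probe counts are illustrative choices, not the paper's):

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

def logdet_chebyshev(A, a, b, deg=30, n_probes=50, seed=0):
    """Estimate logdet(A) = trace(log(A)) for a symmetric positive definite A
    with eigenvalues in [a, b], via a degree-`deg` Chebyshev approximation of
    log on [a, b] and Hutchinson trace estimation with Rademacher probes.
    Only matrix-vector products with A are used."""
    n = A.shape[0]
    c = Chebyshev.interpolate(np.log, deg, domain=[a, b]).coef
    rng = np.random.default_rng(seed)
    # map A into the Chebyshev domain [-1, 1]
    M = lambda x: (2.0 * (A @ x) - (a + b) * x) / (b - a)
    total = 0.0
    for _ in range(n_probes):
        v = rng.choice([-1.0, 1.0], size=n)        # Rademacher probe vector
        t_prev, t_cur = v, M(v)                    # T_0(M)v, T_1(M)v
        acc = c[0] * t_prev + c[1] * t_cur
        for k in range(2, deg + 1):                # three-term recurrence
            t_prev, t_cur = t_cur, 2.0 * M(t_cur) - t_prev
            acc += c[k] * t_cur
        total += v @ acc                           # v^T p(A) v
    return total / n_probes

# toy check against the exact value on an SPD matrix with known spectrum
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.normal(size=(50, 50)))
eigs = np.linspace(0.5, 2.0, 50)
A = (Q * eigs) @ Q.T
est = logdet_chebyshev(A, 0.4, 2.1, deg=40, n_probes=100)
print(est, np.sum(np.log(eigs)))
```

The estimate is unbiased up to the polynomial approximation error, and its variance shrinks with the number of probes, which is what makes the approach scale to matrices where a Cholesky factorization is infeasible.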


Adaptive Sparse Grid Approaches to Polynomial Chaos Expansions for Uncertainty Quantification

Doctoral dissertation by Justin Gregory Winokur, Department of Mechanical Engineering & Materials Science, Duke University; supervised by Omar M. Knio.



Publication date: 2014